Function Approximation with Randomly Initialized Neural Networks for Approximate Model Reference Adaptive Control
Classical results in neural network approximation theory show how arbitrary
continuous functions can be approximated by networks with a single hidden
layer, under mild assumptions on the activation function. However, the
classical theory does not give a constructive means to generate the network
parameters that achieve a desired accuracy. Recent results have demonstrated
that for specialized activation functions, such as ReLUs and some classes of
analytic functions, high accuracy can be achieved via linear combinations of
randomly initialized activations. These recent works rely on integral representations of target functions that are tailored to the specific activation function used. This paper defines mollified integral
representations, which provide a means to form integral representations of
target functions using activations for which no direct integral representation
is currently known. The new construction yields approximation guarantees for randomly initialized networks with a variety of widely used activation functions.
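
To make the random-feature idea concrete, here is a minimal sketch, assuming a 1-D target, standard-normal input weights, and a ReLU activation; the target function, sample sizes, and sampling distributions are illustrative choices, not details from the paper. The hidden-layer parameters stay at their random initialization, and only the output-layer coefficients are fit by linear least squares.

```python
# Hypothetical sketch (not the paper's construction): approximating a
# 1-D target function by a linear combination of randomly initialized
# ReLU units. The hidden parameters (w_k, b_k) are frozen at random
# values; only the output weights c are trained, via least squares.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)  # arbitrary smooth target

n_features = 500                       # number of random ReLU units
w = rng.normal(size=n_features)        # random input weights (frozen)
b = rng.uniform(-1, 1, n_features)     # random biases (frozen)

def features(x):
    # Column k holds ReLU(w_k * x + b_k) evaluated at each sample.
    return np.maximum(x[:, None] * w[None, :] + b[None, :], 0.0)

# Fit the output layer on sampled data by linear least squares.
x_train = rng.uniform(-1, 1, 2000)
c, *_ = np.linalg.lstsq(features(x_train), target(x_train), rcond=None)

# Measure the approximation error on a held-out grid.
x_test = np.linspace(-1, 1, 1000)
err = np.max(np.abs(features(x_test) @ c - target(x_test)))
print(f"sup-norm error on test grid: {err:.4f}")
```

Increasing n_features should drive the error down without ever adjusting the randomly initialized hidden parameters, which is the kind of guarantee the abstract describes for randomly initialized networks.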